Deep explainable method for encrypted traffic classification
Jian CUI, Kailang MA, Yu SUN, Dou WANG, Junliang ZHOU
Journal of Computer Applications, 2023, 43(4): 1151-1159. DOI: 10.11772/j.issn.1001-9081.2022030382

Current deep learning models have achieved significant performance advantages over traditional machine learning methods in encrypted traffic classification tasks. However, due to the inherent black-box nature of deep learning models, users cannot know how the model arrives at its classification decisions. To enhance the credibility of the deep learning model while preserving classification accuracy, an explainable method for deep learning based encrypted traffic classification was proposed, comprising prototype-based active explanation at the traffic level and feature-similarity saliency maps for passive explanation at the packet level. Firstly, the prototype-based Flow Prototype Network (FlowProtoNet) was used to automatically extract typical traffic segments, namely traffic prototypes, during training. Secondly, the similarity between the tested traffic and each prototype was calculated during testing, realizing both classification and a traceability explanation back to the training set. Thirdly, to further improve visual explainability, the Gradient Similarity Saliency Map (Grad-SSM) method was proposed, in which feature maps were weighted by gradients to filter out regions irrelevant to the classification decision, and the Earth Mover's Distance (EMD) between the tested traffic and the prototype extracted by FlowProtoNet was then calculated to obtain a similarity matrix, further focusing the attention heatmap on the comparison between the test traffic and this prototype. On the ISCX VPN-nonVPN dataset, the accuracy of the proposed method reaches 96.86%, which is comparable to that of non-explainable methods, and FlowProtoNet further provides a classification basis by giving the similarity to each prototype. At the same time, the proposed method has stronger visual explainability and focuses more on the key packets in the traffic.
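To make the two mechanisms concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a prototype-based classifier whose logits are similarities between a flow embedding and learnable per-class prototypes (in the spirit of FlowProtoNet), plus a gradient-weighted saliency map compared across flows with a 1-D EMD stand-in (in the spirit of Grad-SSM). The encoder layout, prototype counts, input shape, and the 1-D EMD simplification are all illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PrototypeClassifier(nn.Module):
        """Prototype-based flow classifier: class logits are similarities
        between a flow embedding and learnable per-class prototypes."""
        def __init__(self, num_classes: int, protos_per_class: int = 3, dim: int = 64):
            super().__init__()
            self.num_classes = num_classes
            # Encoder maps a flow image (e.g. packets x bytes) to an embedding.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(16 * 4 * 4, dim),
            )
            # Learnable prototypes; in the paper each one is traced back to a
            # concrete training-set segment for explanation.
            self.prototypes = nn.Parameter(torch.randn(num_classes * protos_per_class, dim))
            self.register_buffer(
                "proto_class", torch.arange(num_classes).repeat_interleave(protos_per_class))

        def forward(self, x):
            z = self.encoder(x)                           # (B, dim)
            sims = -torch.cdist(z, self.prototypes) ** 2  # similarity = -squared distance
            # Class logit = best similarity among that class's prototypes.
            logits = torch.stack(
                [sims[:, self.proto_class == c].max(dim=1).values
                 for c in range(self.num_classes)], dim=1)
            return logits, sims

    def grad_weighted_map(model, x, target_class):
        """Grad-CAM-style saliency: weight the first conv feature map by the
        gradient of the target logit, suppressing decision-irrelevant regions."""
        feats = {}
        handle = model.encoder[0].register_forward_hook(lambda m, i, o: feats.update(a=o))
        logits, _ = model(x)
        handle.remove()
        grads = torch.autograd.grad(logits[0, target_class], feats["a"])[0]
        cam = F.relu((grads.mean(dim=(2, 3), keepdim=True) * feats["a"]).sum(dim=1))
        return cam / (cam.max() + 1e-8)                   # (1, H, W), normalized

    def emd_1d(p, q):
        """EMD between two 1-D distributions via cumulative-sum differences
        (a simplification of the 2-D EMD applied to saliency maps in the paper)."""
        p, q = p / (p.sum() + 1e-8), q / (q.sum() + 1e-8)
        return torch.abs(torch.cumsum(p - q, dim=0)).sum()

    # Usage: compare the test flow's saliency with that of the training segment
    # behind its nearest prototype (x_proto), flattened for the 1-D EMD stand-in.
    model = PrototypeClassifier(num_classes=6)
    x_test, x_proto = torch.rand(1, 1, 16, 16), torch.rand(1, 1, 16, 16)
    cls = model(x_test)[0].argmax(dim=1).item()
    dist = emd_1d(grad_weighted_map(model, x_test, cls).flatten(),
                  grad_weighted_map(model, x_proto, cls).flatten())

Here the prototypes are free parameters for brevity; in the paper each prototype corresponds to an actual traffic segment from the training set, which is what x_proto stands in for.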

Multi-head attention memory network for short text sentiment classification
Yu DENG, Xiaoyu LI, Jian CUI, Qi LIU
Journal of Computer Applications, 2021, 41(11): 3132-3138. DOI: 10.11772/j.issn.1001-9081.2021010040

With the development of social networks, analyzing the sentiments of the massive texts in social networks has important social value. Unlike ordinary text classification, short text sentiment classification needs to mine implicit sentiment semantic features, making it difficult and challenging. To obtain higher-level sentiment semantic features of short texts, a new Multi-head Attention Memory Network (MAMN) was proposed for short text sentiment classification. Firstly, n-gram feature information and the Ordered Neurons Long Short-Term Memory (ON-LSTM) network were used to improve the multi-head self-attention mechanism, fully extracting the internal relationships of the text context so that the model was able to obtain richer text feature information. Secondly, the multi-head attention mechanism was adopted to optimize the multi-hop memory network structure, expanding the depth of the model while mining higher-level internal semantic relations of the context. Extensive experiments were carried out on the Movie Review (MR), Stanford Sentiment Treebank (SST)-1, and SST-2 datasets. The experimental results show that, compared with baseline models based on Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) structures as well as some recent works, the proposed MAMN achieves better classification results, and the importance of the multi-hop structure to the performance improvement is verified.
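As a rough illustration of the architecture described above, the following is a minimal, hypothetical PyTorch sketch, not the published MAMN: n-gram features from parallel 1-D convolutions feed a recurrent context encoder (a plain bidirectional LSTM standing in for ON-LSTM), and a multi-head attention layer is applied over several hops to an evolving query. All layer sizes, the hop count, and the LSTM substitution are illustrative assumptions.

    import torch
    import torch.nn as nn

    class MAMNSketch(nn.Module):
        """Multi-hop, multi-head attention over n-gram-enriched context."""
        def __init__(self, vocab_size, emb_dim=128, hidden=128, heads=4,
                     hops=3, num_classes=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            # n-gram features via parallel 1-D convolutions (odd kernel sizes
            # keep the sequence length unchanged).
            self.ngrams = nn.ModuleList(
                [nn.Conv1d(emb_dim, hidden, k, padding=k // 2) for k in (1, 3, 5)])
            # Plain bidirectional LSTM as a stand-in for ON-LSTM.
            self.context = nn.LSTM(3 * hidden, hidden, batch_first=True,
                                   bidirectional=True)
            self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
            self.hops = hops
            self.fc = nn.Linear(2 * hidden, num_classes)

        def forward(self, tokens):                       # tokens: (B, T) int ids
            e = self.embed(tokens)                       # (B, T, E)
            g = torch.cat([conv(e.transpose(1, 2)) for conv in self.ngrams], dim=1)
            ctx, _ = self.context(g.transpose(1, 2))     # (B, T, 2*hidden)
            # Multi-hop memory: repeatedly re-query the same context memory,
            # deepening the model and refining the sentiment representation.
            q = ctx.mean(dim=1, keepdim=True)            # initial query (B, 1, D)
            for _ in range(self.hops):
                out, _ = self.attn(q, ctx, ctx)
                q = q + out                              # residual update per hop
            return self.fc(q.squeeze(1))                 # (B, num_classes)

    # Usage: a batch of 8 token-id sequences of length 32.
    model = MAMNSketch(vocab_size=10000)
    logits = model(torch.randint(0, 10000, (8, 32)))

The residual update per hop is one simple way to realize the "expand the depth while re-mining context" idea; the published model's exact hop update rule may differ.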
